A noisy training set usually degrades the generalization and robustness of neural networks. In this paper, we propose a novel, theoretically guaranteed clean-sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method that models the linear relation between network features and one-hot labels. In SPR, the clean data are identified through the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under certain conditions. In general scenarios, however, these conditions may no longer hold, and some noisy data are falsely selected as clean. To solve this problem, we propose a data-adaptive method, Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) among the selected clean data. To improve efficiency, we further present a splitting algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can serve as a sample-selection module in a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
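To make the mean-shift idea concrete, here is a minimal numpy sketch of clean-sample selection in the spirit of SPR, assuming a linear model Y = XB + Γ + ε in which rows of the per-sample mean-shift matrix Γ that shrink exactly to zero mark clean samples. The plain block-coordinate solver, the penalty `lam`, and the synthetic data are illustrative assumptions; they stand in for the paper's scalable algorithm and do not include the knockoff filter.

```python
import numpy as np

def spr_select(X, Y, lam=0.7, n_iters=50):
    """Flag samples whose mean-shift parameters are driven to zero as clean."""
    gamma = np.zeros_like(Y)                      # per-sample mean-shift parameters
    for _ in range(n_iters):
        # beta-step: least squares on the shifted targets
        beta, *_ = np.linalg.lstsq(X, Y - gamma, rcond=None)
        # gamma-step: row-wise soft-thresholding of the residuals (group-lasso prox)
        R = Y - X @ beta
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        gamma = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0) * R
    return np.linalg.norm(gamma, axis=1) < 1e-8   # zero mean-shift => clean

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=200)
X = 4.0 * np.eye(16)[labels] + rng.normal(scale=0.1, size=(200, 16))
Y = np.eye(4)[labels]
Y[:20] = np.roll(Y[:20], 1, axis=1)               # inject label noise into 20 rows
clean = spr_select(X, Y)
print("noisy rows flagged:", int((~clean[:20]).sum()), "/ 20,",
      "clean rows kept:", int(clean[20:].sum()), "/ 180")
```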
Developing autonomous vehicles (AVs) helps improve the road safety and traffic efficiency of intelligent transportation systems (ITS). Accurately predicting the trajectories of traffic participants is essential to the decision-making and motion planning of AVs in interactive scenarios. Recently, learning-based trajectory predictors have shown state-of-the-art performance in highway and urban areas. However, most existing learning-based models are trained on fixed datasets and may perform poorly in continuously changing scenarios. Specifically, after learning a new scenario, they may no longer perform well in previously learned ones, a phenomenon known as "catastrophic forgetting". Few studies investigate trajectory prediction in continuous scenarios, where catastrophic forgetting may happen. To handle this problem, we first propose a novel continual learning (CL) approach for vehicle trajectory prediction. Then, inspired by brain science, we develop a dynamic memory mechanism that uses a measure of traffic divergence between scenarios to balance the performance and training efficiency of the proposed CL approach. Finally, datasets collected from different locations are used to design continual training and testing methods for the experiments. Experimental results show that the proposed approach achieves consistently high prediction accuracy in continuous scenarios without re-training, mitigating catastrophic forgetting compared with non-CL approaches. The implementation of the proposed approach is publicly available at https://github.com/BIT-Jack/D-GSM
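A hypothetical sketch of the divergence-driven memory idea: a rehearsal buffer whose per-scenario share grows with how much a new scenario's trajectory-feature distribution diverges from what is already stored. The diagonal-Gaussian symmetric KL, the proportional budget rule, and the class names are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def gaussian_sym_kl(a, b):
    """Symmetric KL divergence between diagonal-Gaussian fits of two feature sets."""
    def kl(m1, v1, m2, v2):
        return 0.5 * np.sum(v1 / v2 + (m2 - m1) ** 2 / v2 - 1.0 + np.log(v2 / v1))
    m1, v1 = a.mean(0), a.var(0) + 1e-6
    m2, v2 = b.mean(0), b.var(0) + 1e-6
    return kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1)

class DynamicMemory:
    """Rehearsal buffer whose per-scenario share grows with traffic divergence."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.scenarios = []                        # list of (divergence, samples)

    def add_scenario(self, samples):
        past = np.concatenate([s for _, s in self.scenarios]) if self.scenarios else None
        div = gaussian_sym_kl(samples, past) if past is not None else 1.0
        self.scenarios.append((div, samples))
        total = sum(d for d, _ in self.scenarios)  # re-split the budget by divergence
        self.scenarios = [(d, s[:max(1, int(self.capacity * d / total))])
                          for d, s in self.scenarios]

    def rehearsal_batch(self, size=64):
        pool = np.concatenate([s for _, s in self.scenarios])
        idx = np.random.choice(len(pool), size=min(size, len(pool)), replace=False)
        return pool[idx]

mem = DynamicMemory()
for loc in range(3):                               # scenarios from three locations
    mem.add_scenario(np.random.randn(500, 8) + loc)
print("rehearsal batch shape:", mem.rehearsal_batch().shape)
```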
Few-shot learning (FSL) aims to transfer knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information. It is an important research problem with great practical value in real-world applications. Despite extensive previous efforts on few-shot learning, we emphasize that most existing methods do not account for the distributional shift caused by sample-selection bias in the FSL scenario. Such selection bias can induce spurious correlation between the semantic causal features, which are causally and semantically related to the class label, and other non-causal features. Critically, the former are invariant across changes in distribution and highly related to the classes of interest, and thus generalize well to novel classes, while the latter are not stable under distribution changes. To resolve this problem, we propose a novel data augmentation strategy, dubbed PatchMix, that breaks this spurious dependency by replacing the patch-level information and supervision of the query images with those of random gallery images from classes different from the query's. We theoretically show that such an augmentation mechanism, unlike existing ones, is able to identify the causal features. To further make these features discriminative enough for classification, we propose Correlation-guided Reconstruction (CGR) and a Hardness-Aware module for instance discrimination and easier discrimination between similar classes. Moreover, this framework can be adapted to the unsupervised FSL scenario.
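A minimal sketch of a PatchMix-style augmentation, assuming NCHW float images: one random patch of each query image is overwritten with the corresponding patch of a gallery image drawn from a different class, and the supervision for that patch follows the gallery label. The patch size, single-patch policy, and shapes are illustrative assumptions.

```python
import numpy as np

def patchmix(query, q_labels, gallery, g_labels, patch=8, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n, _, h, w = query.shape
    mixed = query.copy()
    lam = np.ones(n)                        # weight kept by the query label
    mix_labels = q_labels.copy()
    area = (patch * patch) / (h * w)
    for i in range(n):
        donors = np.flatnonzero(g_labels != q_labels[i])   # different class only
        j = rng.choice(donors)
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        mixed[i, :, y:y + patch, x:x + patch] = gallery[j, :, y:y + patch, x:x + patch]
        lam[i], mix_labels[i] = 1.0 - area, g_labels[j]    # patch-level supervision
    return mixed, q_labels, mix_labels, lam

imgs = np.random.rand(4, 3, 32, 32).astype(np.float32)
labels = np.arange(4)
mixed, y_q, y_g, lam = patchmix(imgs, labels, imgs, labels)
print(mixed.shape, y_g, lam)
```

A training loss would then mix the two labels, e.g. `lam * CE(pred, y_q) + (1 - lam) * CE(pred, y_g)`, in the style of CutMix.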
Deep learning-based physical-layer secret key generation (PKG) has been used to overcome the imperfect uplink/downlink channel reciprocity in frequency division duplexing (FDD) orthogonal frequency division multiplexing (OFDM) systems. However, existing efforts have focused on key generation for users in a specific environment where the training samples and test samples follow the same distribution, which is unrealistic for real-world applications. This paper formulates the PKG problem in multiple environments as a learning-based problem: knowledge such as data and models is learned from known environments so that keys can be generated quickly and efficiently in multiple new environments. Specifically, we propose deep transfer learning (DTL) and meta-learning-based channel feature mapping algorithms for key generation. The two algorithms use different training methods to pre-train the model in the known environments, and then quickly adapt and deploy the model to new environments. Simulation results show that, compared with methods without adaptation, both the DTL and meta-learning algorithms improve the performance of the generated keys. In addition, the complexity analysis shows that the meta-learning algorithm achieves better performance than the DTL algorithm in less time and with lower CPU and GPU usage.
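The abstract does not fix a specific meta-learning algorithm, so the sketch below uses a Reptile-style meta-update (a deliberately simpler stand-in for MAML-like training) on a toy linear uplink-to-downlink feature mapping: pre-train across known environments, then adapt with a few gradient steps in an unseen one. All models, environments, and hyper-parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.normal(size=(16, 16))                     # structure shared across environments

def make_env():
    """Toy environment: shared uplink-to-downlink map plus a local perturbation."""
    A = A0 + 0.2 * rng.normal(size=(16, 16))
    def sample(n):
        X = rng.normal(size=(n, 16))               # uplink channel features
        return X, X @ A + 0.01 * rng.normal(size=(n, 16))  # noisy downlink features
    return sample

def sgd_adapt(W, env, steps=20, lr=0.1, n=64):
    """Inner-loop adaptation: plain SGD on the mean squared mapping error."""
    for _ in range(steps):
        X, Y = env(n)
        W -= lr * X.T @ (X @ W - Y) / n
    return W

W_meta = np.zeros((16, 16))
for _ in range(200):                               # pre-training on known environments
    W_adapted = sgd_adapt(W_meta.copy(), make_env())
    W_meta += 0.1 * (W_adapted - W_meta)           # Reptile meta-update

new_env = make_env()                               # unseen environment
W_new = sgd_adapt(W_meta.copy(), new_env, steps=5) # quick few-step adaptation
X, Y = new_env(256)
print("post-adaptation MSE:", float(np.mean((X @ W_new - Y) ** 2)))
```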
Entity linking for spoken language aims to recognize and disambiguate named entities in speech. Conventional methods suffer severely from unconstrained speaking styles and the noisy transcripts produced by ASR systems. In this paper, we propose a novel approach named Knowledge Enhanced Named Entity Recognition (KENER), which focuses on improving robustness by painlessly incorporating proper knowledge in the entity recognition stage, thereby improving the overall performance of entity linking. KENER first retrieves candidate entities for a sentence before any mention is detected, and then exploits the entity descriptions as extra information to help recognize mentions. The candidate entities retrieved by the dense retrieval module are especially useful when the input is short or noisy. Moreover, we investigate various data sampling strategies and design effective loss functions to improve the quality of the retrieved entities in the recognition and disambiguation stages. Finally, a linking-with-filtering module is applied as a final safeguard, making it possible to filter out wrongly recognized mentions. Our system achieved first place in Track 1 of the NLPCC-2022 Shared Task 2.
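An illustrative sketch of KENER's retrieve-then-recognize flow: candidate entities are fetched for the raw sentence by dense retrieval (before any mention is detected), and their descriptions are concatenated to the NER input as extra knowledge. The toy bag-of-words encoder and the three-entry knowledge base are assumptions standing in for a trained dense retriever.

```python
import numpy as np

KB = {  # toy knowledge base: entity -> description
    "Apple Inc.": "American technology company that makes the iPhone.",
    "apple": "Edible fruit of the apple tree.",
    "Paris": "Capital city of France.",
}

def embed(text, dim=64):
    """Toy bag-of-words encoder, a stand-in for a trained dense retriever."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-12)

def retrieve(sentence, k=2):
    """Dense retrieval of candidate entities before any mention is detected."""
    q = embed(sentence)
    return sorted(KB, key=lambda e: -(q @ embed(e + " " + KB[e])))[:k]

def knowledge_augmented_input(sentence):
    """Concatenate candidate descriptions so the NER model can use them."""
    knowledge = " [SEP] ".join(f"{e}: {KB[e]}" for e in retrieve(sentence))
    return f"{sentence} [SEP] {knowledge}"

print(knowledge_augmented_input("i bought an apple phone in paris"))
```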
Time series classification is an important problem in the real world. Because the non-stationary nature of time series makes their distributions change over time, it remains challenging to build models that generalize to unseen distributions. In this paper, we propose to view the time series classification problem from a distributional perspective. We argue that the temporal complexity is attributable to unknown latent distributions within the data. To this end, we propose DIVERSIFY to learn generalized representations for time series classification. DIVERSIFY follows an iterative process: it first obtains the worst-case distribution scenario via adversarial training, and then matches the distributions of the obtained sub-domains. We also provide some theoretical insights. We conduct experiments on gesture recognition, speech command recognition, wearable stress and affect detection, and sensor-based human activity recognition, with a total of seven datasets under different settings. Results show that DIVERSIFY significantly outperforms other baselines and effectively characterizes the latent distributions, as supported by qualitative and quantitative analyses.
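A heavily simplified sketch of DIVERSIFY's alternation, under strong assumptions: latent sub-domains are inferred by nearest-centroid clustering (a stand-in for the adversarial worst-case assignment), and distribution matching is reduced to aligning sub-domain feature means with a toy linear feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)
# one training set that secretly contains two latent sub-domains
X = np.concatenate([rng.normal(m, 1.0, size=(100, 8)) for m in (0.0, 2.0)])
W = rng.normal(size=(8, 4))                        # toy linear feature extractor
C = rng.normal(size=(2, 4))                        # latent sub-domain centroids

for _ in range(50):
    Z = X @ W                                      # current representations
    # step 1: infer latent sub-domains (stand-in for the adversarial worst case)
    d = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    if (d == 0).any() and (d == 1).any():
        C[0], C[1] = Z[d == 0].mean(0), Z[d == 1].mean(0)
        # step 2: match the sub-domains by shrinking the gap between their means
        mx = X[d == 0].mean(0) - X[d == 1].mean(0)
        lr = 0.4 / (mx @ mx + 1e-12)               # step size scaled for stability
        W -= lr * np.outer(mx, 2.0 * (mx @ W))

print("sub-domain feature gap after matching:", float(np.linalg.norm(C[0] - C[1])))
```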
The number of Earth observation satellites (EOSs) has grown rapidly over the past decades, making EOS scheduling increasingly complex. Motivated by the wide application of large-region observation, this paper aims to solve the EOS observation scheduling problem for large region targets. A rapid coverage calculation method employing a projection reference plane and a polygon clipping technique is first developed. We then formulate a nonlinear integer programming model for the scheduling problem, in which the objective function is computed based on the developed coverage calculation method. A greedy-initialized resampling particle swarm optimization (GI-RPSO) algorithm is proposed to solve the model. The adopted greedy initialization strategy and particle resampling method help generate efficient solutions during the evolution process. Finally, extensive experiments are carried out to illustrate the effectiveness and reliability of the proposed method. Compared with traditional particle swarm optimization and the widely used greedy algorithm, the proposed GI-RPSO improves the scheduling results by 5.42% and 15.86%, respectively.
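A toy sketch of the GI-RPSO recipe: binary PSO over which observation strips to schedule, with one particle seeded by a greedy solution and the weakest particles periodically resampled around the global best. The random strip values, the budget constraint, and all hyper-parameters are illustrative; a real fitness would come from the paper's coverage calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strips, budget, n_particles = 30, 10, 20
value = rng.uniform(0.0, 1.0, n_strips)            # stand-in for computed coverage

def fitness(x):
    """Total covered value of a binary schedule, infeasible if over budget."""
    return value @ x if x.sum() <= budget else -1.0

greedy = np.zeros(n_strips)                        # greedy initialization
greedy[np.argsort(-value)[:budget]] = 1.0

swarm = (rng.random((n_particles, n_strips)) < 0.3).astype(float)
swarm[0] = greedy                                  # seed one particle greedily
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pfit = np.array([fitness(x) for x in swarm])
gbest = pbest[pfit.argmax()].copy()

for it in range(100):
    r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm = (rng.random(swarm.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(x) for x in swarm])
    better = fit > pfit
    pbest[better], pfit[better] = swarm[better], fit[better]
    gbest = pbest[pfit.argmax()].copy()
    if it % 20 == 19:                              # resample the weakest particles
        worst = np.argsort(pfit)[:5]
        swarm[worst] = (rng.random((5, n_strips)) < 0.5) * gbest

print(f"greedy value: {fitness(greedy):.3f}  GI-RPSO value: {fitness(gbest):.3f}")
```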
Artificial intelligence (AI) systems have become increasingly popular in many fields. Nevertheless, AI technologies are still under development, and many issues need to be addressed. Among them, the reliability of AI systems needs to be demonstrated so that the systems can be used with confidence and trusted by the public. In this paper, we provide a statistical perspective on the reliability of AI systems. Different from other considerations, the reliability of AI systems focuses on the time dimension; that is, the system can perform its designed functionality for the intended period. We introduce a so-called SMART statistical framework for AI reliability research, comprising five components: the Structure of the system, Metrics of reliability, Analysis of failure causes, Reliability assessment, and Test planning. We review traditional methods in reliability data analysis and software reliability, and discuss how existing methods can be transformed for reliability modeling and assessment of AI systems. We also describe recent developments in modeling and analyzing AI reliability and outline statistical research challenges in this area, including out-of-distribution detection, the effects of the training set, adversarial attacks, model accuracy, and uncertainty quantification, and discuss how these topics relate to AI reliability, with illustrative examples. Finally, we discuss data collection and test planning for AI reliability assessment, as well as how to improve system designs for higher AI reliability. The paper closes with some concluding remarks.
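As a tiny illustration of the time dimension the abstract emphasizes, the snippet below fits an exponential time-between-failures model to synthetic failure data and evaluates the reliability function R(t) = exp(-t / MTBF); the numbers are made up and only show the kind of metric involved.

```python
import numpy as np

inter_failure_times = np.array([120.0, 340.0, 95.0, 410.0, 230.0])  # hours, synthetic
mtbf = inter_failure_times.mean()              # MLE of the exponential mean

def reliability(t, mtbf=mtbf):
    """Probability the system performs its function for t hours without failure."""
    return np.exp(-t / mtbf)

print(f"MTBF = {mtbf:.1f} h, R(100 h) = {reliability(100.0):.3f}")
```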
We propose to learn an invariant causal predictor in the supervised regression scenario that is robust to distribution shifts. Based on a disentangled causal factorization that describes the underlying data-generating process, we attribute distribution shifts to mutations of the generating factors; this covers a wide range of distribution-shift cases, since we impose no prior assumption on the causal structure or the source of the mutation. Under this causal framework, we identify a set of invariant predictors based on the do-operator. We provide a sufficient and necessary condition for a predictor to be minimax optimal, i.e., to minimize the worst-case quadratic loss across all domains. This condition is reasonable under the Markov and faithfulness assumptions, and it thus inspires a practical algorithm for identifying the optimal predictor. For empirical estimation, we propose a permutation-based recovery scheme guided by a local causal discovery procedure. The utility and effectiveness of our method are demonstrated on simulated data and two real-world applications: Alzheimer's disease diagnosis and gene function prediction.
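A hedged formalization of the minimax objective described above, in our own notation (not necessarily the paper's): the invariant causal predictor minimizes the worst-case quadratic loss over the set of environments induced by mutations of the generating factors.

```latex
f^{\ast} \;=\; \arg\min_{f}\; \max_{e \in \mathcal{E}}\;
\mathbb{E}_{(X,Y) \sim P_{e}}\!\left[ \left( Y - f(X) \right)^{2} \right]
```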
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, mainly because the learned representations mix semantic factors with variation factors due to their domain-specific correlations, while only the semantic factors cause the output. To address this problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning to model the two factors separately, and develop methods for OOD prediction from a single training domain, which is common and challenging. The methods are based on the causal invariance principle, with a novel design in variational Bayes for both efficient learning and easy prediction. Theoretically, we prove that, under certain conditions, CSG can identify the semantic factor by fitting the training data, and this semantic identification guarantees a bound on the OOD generalization error as well as the success of adaptation. Empirical studies show improved performance over prevailing baselines.
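A schematic torch sketch of the factorization the abstract describes: a semantic latent s and a variation latent v jointly generate x, while only s causes y; learning maximizes a single-sample ELBO. The tiny linear networks, dimensions, and standard-normal prior are illustrative assumptions, not the paper's architecture or its causal prior.

```python
import torch
import torch.nn as nn

class CSGSketch(nn.Module):
    def __init__(self, x_dim=20, s_dim=4, v_dim=4, n_classes=3):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * (s_dim + v_dim))  # q(s, v | x), Gaussian
        self.dec = nn.Linear(s_dim + v_dim, x_dim)        # p(x | s, v)
        self.cls = nn.Linear(s_dim, n_classes)            # p(y | s): only s causes y
        self.s_dim, self.v_dim = s_dim, v_dim

    def forward(self, x, y):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        s, v = z.split([self.s_dim, self.v_dim], dim=-1)
        recon = ((self.dec(z) - x) ** 2).sum(-1)               # -log p(x|s,v) up to const.
        label = nn.functional.cross_entropy(self.cls(s), y, reduction="none")
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(-1)  # KL to N(0, I)
        return (recon + label + kl).mean()                     # negative ELBO

model = CSGSketch()
x, y = torch.randn(8, 20), torch.randint(0, 3, (8,))
print("negative ELBO:", float(model(x, y)))
```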